les: polish code #22625
Conversation
pp.activeQueue.Push(c)
pp.enforceLimits()
if c.tempCapacity > 0 {
	commit = true
	c.bias = 0
This is not an identical change, and I am not sure about the potential effects. Until now, the bias was only ever applied to the one node that was being inserted or whose capacity was being increased. It is meant to protect against quick oscillations, so we always apply it to the node that is trying to push others out of the pool; once we decide to grant it the space it needs, it is no longer biased. Now tryActivate can activate multiple clients, and in your version the previously activated ones stay biased until the entire operation finishes. This means that further inactive nodes can very easily deactivate recently activated ones if their priorities are very close to each other, because both are biased and the already activated one has no advantage over the latest activation attempt. I'd prefer to keep the old and thoroughly tested behavior here.
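To illustrate the oscillation-protection argument above, here is a minimal, self-contained sketch. The `client` struct and `effectivePriority` helper are hypothetical simplifications, not the actual les/vflux/server types: the point is only that when the bias penalty is applied to the candidate alone (and cleared once it is activated), a newcomer with a marginally higher raw priority cannot push out an already-active node.

```go
package main

import "fmt"

// client is an illustrative stand-in for a pool node; the real
// les/vflux/server implementation uses different types and units.
type client struct {
	name     string
	priority int64
	bias     int64 // penalty carried only while the node tries to enter
}

// effectivePriority subtracts the bias, so a biased newcomer must beat
// an established (unbiased) node by a clear margin to displace it.
func effectivePriority(c client) int64 { return c.priority - c.bias }

func main() {
	established := client{name: "A", priority: 100, bias: 0} // bias cleared at activation
	newcomer := client{name: "B", priority: 102, bias: 5}    // close priority, still biased

	// Raw priorities are very close, but only the newcomer is biased,
	// so the established client keeps its slot and no oscillation occurs.
	if effectivePriority(newcomer) > effectivePriority(established) {
		fmt.Println("B displaces A")
	} else {
		fmt.Println("A keeps its slot")
	}
}
```

If instead both nodes kept their bias until the whole batch operation finished, the two effective priorities would again sit within a couple of points of each other, and small fluctuations could flip the active set back and forth.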
Gotcha. It's my mistake; I'll fix it.
@@ -151,19 +153,14 @@ func (h *serverHandler) handle(p *clientPeer) error {
	p.Log().Debug("Client pool already closed")
	return p2p.DiscRequested
}
activeCount, _ := h.server.clientPool.Active()
clientConnectionGauge.Update(int64(activeCount))
Why did you remove these updates? Do you think this metric is useless? Btw, the metric itself is still there in metrics.go, so this looks wrong.
Because they are already counted internally in the clientpool: https://github.com/ethereum/go-ethereum/blob/master/les/vflux/server/clientpool.go#L154
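The design point being made is that the gauge should be updated in one place, inside the pool itself, rather than duplicated by every caller. A minimal sketch of that pattern, using an illustrative `gauge` type as a stand-in for the go-ethereum metrics gauge (names here are assumptions, not the real les code):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// gauge is a tiny stand-in for a metrics gauge; the real code uses
// the github.com/ethereum/go-ethereum/metrics package.
type gauge struct{ v int64 }

func (g *gauge) Update(v int64) { atomic.StoreInt64(&g.v, v) }
func (g *gauge) Value() int64   { return atomic.LoadInt64(&g.v) }

var clientConnectionGauge gauge

// pool updates the gauge on every state change itself, so handlers
// (such as serverHandler.handle in the diff above) no longer need to
// repeat the update after each connect/disconnect.
type pool struct{ active int64 }

func (p *pool) Register() {
	p.active++
	clientConnectionGauge.Update(p.active)
}

func (p *pool) Unregister() {
	p.active--
	clientConnectionGauge.Update(p.active)
}

func main() {
	p := &pool{}
	p.Register()
	p.Register()
	p.Unregister()
	fmt.Println(clientConnectionGauge.Value()) // one client remains active
}
```

Centralizing the update also avoids the gauge drifting out of sync when one call site is updated and another is forgotten.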
@zsfelfoldi I think all the issues are resolved, please take another look!
If clientConnectionGauge is not updated in package les anymore, then we should remove it. Otherwise LGTM.
* les: polish code
* les/vflux/server: fixes
* les: fix lint
This PR polishes the les internals a bit. Nothing important changed.